56 research outputs found

    Max vs Min: Tensor Decomposition and ICA with nearly Linear Sample Complexity

    We present a simple, general technique for reducing the sample complexity of matrix and tensor decomposition algorithms applied to distributions. We use the technique to give a polynomial-time algorithm for standard ICA with sample complexity nearly linear in the dimension, thereby improving substantially on previous bounds. The analysis is based on properties of random polynomials, namely the spacings of an ensemble of polynomials. Our technique also applies to other applications of tensor decompositions, including spherical Gaussian mixture models.
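
    As a point of reference, here is a minimal sketch of the standard ICA setup the abstract targets: observations x = A s with independent non-Gaussian sources s. The recovery step below uses scikit-learn's FastICA as a generic baseline estimator, not the paper's tensor-decomposition technique; the dimension, sample size, and Laplace source distribution are illustrative choices.

```python
# Standard ICA model: observe x = A s, where s has independent
# non-Gaussian coordinates; the goal is to recover the mixing matrix A
# from samples of x alone. FastICA is only a generic baseline here,
# not the method from the abstract.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
d, n = 5, 20000                      # dimension and sample size (illustrative)

S = rng.laplace(size=(n, d))         # independent non-Gaussian (Laplace) sources
A = rng.normal(size=(d, d))          # unknown mixing matrix
X = S @ A.T                          # observed samples, each row is x = A s

ica = FastICA(n_components=d, random_state=0)
S_hat = ica.fit_transform(X)         # estimated sources (up to permutation/scale)
A_hat = ica.mixing_                  # estimated mixing matrix
print(A_hat.shape)                   # (5, 5)
```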

    Optimal Convergence Rate of Hamiltonian Monte Carlo for Strongly Logconcave Distributions

    We study Hamiltonian Monte Carlo (HMC) for sampling from a strongly logconcave density proportional to e^{-f}, where f: R^d -> R is mu-strongly convex and L-smooth (the condition number is kappa = L/mu). We show that the relaxation time (inverse of the spectral gap) of ideal HMC is O(kappa), improving on the previous best bound of O(kappa^{1.5}); we complement this with an example where the relaxation time is Omega(kappa). When implemented using a nearly optimal ODE solver, HMC returns an epsilon-approximate point in 2-Wasserstein distance using O~((kappa d)^{0.5} epsilon^{-1}) gradient evaluations per step and O~((kappa d)^{1.5} epsilon^{-1}) total time.
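
    For concreteness, a minimal leapfrog-discretized HMC sampler for a target proportional to e^{-f} is sketched below. This is a plain leapfrog integrator with a Metropolis correction, not the nearly optimal ODE solver the abstract analyzes; the step size, trajectory length, and example target are illustrative.

```python
# Minimal leapfrog HMC for a target pi(x) proportional to e^{-f(x)}.
# Plain leapfrog + Metropolis correction; not the ODE solver from the abstract.
import numpy as np

def hmc_sample(grad_f, f, x0, n_samples, step=0.1, n_leapfrog=20, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    x = np.array(x0, dtype=float)
    samples = []
    for _ in range(n_samples):
        p = rng.normal(size=x.shape)              # resample momentum
        x_new, p_new = x.copy(), p.copy()
        p_new -= 0.5 * step * grad_f(x_new)       # half step for momentum
        for _ in range(n_leapfrog):
            x_new += step * p_new                 # full step for position
            p_new -= step * grad_f(x_new)         # full step for momentum
        p_new += 0.5 * step * grad_f(x_new)       # undo the extra half momentum step
        # Metropolis correction keeps the exact target despite discretization error.
        h_old = f(x) + 0.5 * p @ p
        h_new = f(x_new) + 0.5 * p_new @ p_new
        if np.log(rng.uniform()) < h_old - h_new:
            x = x_new
        samples.append(x.copy())
    return np.array(samples)

# Example: strongly logconcave Gaussian target with condition number kappa = 10.
mu_vec = np.array([1.0, 2.0])
prec = np.diag([1.0, 10.0])                       # precision matrix (mu = 1, L = 10)
f = lambda x: 0.5 * (x - mu_vec) @ prec @ (x - mu_vec)
grad_f = lambda x: prec @ (x - mu_vec)
draws = hmc_sample(grad_f, f, x0=np.zeros(2), n_samples=2000)
print(draws.mean(axis=0))                         # should approach mu_vec
```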

    Convergence of Gibbs Sampling: Coordinate Hit-And-Run Mixes Fast

    The Gibbs Sampler is a general method for sampling high-dimensional distributions, dating back to 1971. In each step of the Gibbs Sampler, we pick a random coordinate and re-sample that coordinate from the distribution induced by fixing all the other coordinates. While it has become widely used over the past half-century, guarantees of efficient convergence have been elusive. We show that for a convex body K in R^n with diameter D, the mixing time of the Coordinate Hit-and-Run (CHAR) algorithm on K is polynomial in n and D. We also give a lower bound on the mixing rate of CHAR, showing that it is strictly worse than hit-and-run and the ball walk in the worst case.
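
    A minimal sketch of one Coordinate Hit-and-Run step as described above, for the uniform distribution on a polytope K = {x : Ax <= b}: pick a random coordinate, intersect the axis-parallel line through the current point with K, and resample that coordinate uniformly on the resulting chord. The cube example and step count are illustrative, and K is assumed bounded.

```python
# One Coordinate Hit-and-Run (Gibbs) step for the uniform distribution
# on a bounded polytope K = {x : A x <= b}.
import numpy as np

def char_step(A, b, x, rng):
    i = rng.integers(A.shape[1])            # pick a random coordinate
    slack = b - A @ x                       # nonnegative while x is in K
    col = A[:, i]
    # Feasible range of t so that x + t*e_i stays in K, i.e. t*col <= slack.
    upper = np.min(slack[col > 0] / col[col > 0]) if np.any(col > 0) else np.inf
    lower = np.max(slack[col < 0] / col[col < 0]) if np.any(col < 0) else -np.inf
    t = rng.uniform(lower, upper)           # resample uniformly on the chord
    x = x.copy()
    x[i] += t
    return x

# Example: uniform sampling from the unit cube [0, 1]^3 written as A x <= b.
rng = np.random.default_rng(0)
n = 3
A = np.vstack([np.eye(n), -np.eye(n)])
b = np.concatenate([np.ones(n), np.zeros(n)])
x = np.full(n, 0.5)                         # interior starting point
for _ in range(1000):
    x = char_step(A, b, x, rng)
print(x)
```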

    Contrastive Moments: Unsupervised Halfspace Learning in Polynomial Time

    We give a polynomial-time algorithm for learning high-dimensional halfspaces with margins in d-dimensional space to within a desired TV distance when the ambient distribution is an unknown affine transformation of the d-fold product of an (unknown) symmetric one-dimensional logconcave distribution, and the halfspace is introduced by deleting at least an epsilon fraction of the data in one of the component distributions. Notably, our algorithm does not need labels and establishes the unique (and efficient) identifiability of the hidden halfspace under this distributional assumption. The sample and time complexity of the algorithm are polynomial in the dimension and 1/epsilon. The algorithm uses only the first two moments of suitable re-weightings of the empirical distribution, which we call contrastive moments; its analysis uses classical facts about generalized Dirichlet polynomials and relies crucially on a new monotonicity property of the moment ratio of truncations of logconcave distributions. Such algorithms, based only on first and second moments, were suggested in earlier work, but hitherto eluded rigorous guarantees. Prior work addressed the special case when the underlying distribution is Gaussian via Non-Gaussian Component Analysis. We improve on this by providing polytime guarantees based on Total Variation (TV) distance, in place of existing moment-bound guarantees that can be super-polynomial. Our work is also the first to go beyond Gaussians in this setting. Comment: Preliminary version in NeurIPS 202
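
    A heavily hedged sketch of the "first two moments of a re-weighting" idea: compare the covariance of the whitened sample with the covariance of a re-weighted copy of it, and read off a candidate halfspace normal from the direction where the two differ most. The Gaussian-like weight function and the eigenvector read-out below are illustrative assumptions, not the paper's contrastive-moment construction.

```python
# Illustrative (not the paper's) use of first and second moments of a
# re-weighted empirical distribution to extract a candidate direction.
import numpy as np

def candidate_normal(X, weight=lambda z: np.exp(-0.5 * np.sum(z**2, axis=1))):
    # Whiten the sample: zero mean, identity covariance.
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    L = np.linalg.cholesky(np.linalg.inv(cov))
    Z = Xc @ L
    # First and second moments of the re-weighted empirical distribution.
    w = weight(Z)
    w = w / w.sum()
    mean_w = w @ Z
    cov_w = (Z - mean_w).T @ ((Z - mean_w) * w[:, None])
    # Direction where the re-weighted covariance deviates most from identity.
    diff = cov_w - np.eye(Z.shape[1])
    vals, vecs = np.linalg.eigh(diff)
    v = vecs[:, np.argmax(np.abs(vals))]
    return L @ v                           # candidate normal in original coordinates
```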